Machine learning (ML) has already made a significant impact on our daily lives. From handwritten digit recognition and spam filtering to ranking search results, machine learning techniques help us build intelligent systems more easily and make computers seem smarter. Nevertheless, current ML techniques support only a limited set of supervision protocols, making it difficult to transfer human knowledge to machines efficiently without explicitly labeling examples. Moreover, structured tasks, which involve many interdependent decisions for a given example, are especially expensive to label. Given that many important tasks in natural language processing and information extraction are structured, it is important to develop learning frameworks that can use knowledge resources and other sources of indirect supervision in addition to labeled examples for the task at hand.
I will present my work on reducing the labeling cost of structured tasks through indirect supervision protocols and knowledge-based constraints. We have developed machine learning algorithms that take advantage of indirect supervision alongside existing labeled data. Indirect supervision can come in the form of constraints or of weaker, easier-to-obtain supervision signals. Our proposed learning frameworks handle both structured-output problems and problems with latent structures. We demonstrate the effectiveness of these indirect supervision frameworks on several important natural language processing tasks.